As efficient graph analytical tools, graph neural networks (GNNs) have special properties that are particularly well suited to the characteristics and requirements of wireless communications, exhibiting strong potential for the advancement of next-generation wireless communications. This article aims to provide a comprehensive overview of the interplay between GNNs and wireless communications, including GNNs for wireless communications (GNN4Com) and wireless communications for GNNs (Com4GNN). In particular, we discuss GNN4Com based on how graphical models are constructed, and we introduce Com4GNN with corresponding incentives. We also highlight potential research directions to promote future research endeavors for GNNs in wireless communications.
As effective neural network models for graph data, graph neural networks (GNNs) have recently found successful applications in various wireless optimization problems. Since the inference stage of GNNs can naturally be implemented in a decentralized manner, GNNs are a potential enabler for decentralized control/management in next-generation wireless communications. However, privacy leakage may occur during the information exchange among neighbors in decentralized GNN inference. To address this issue, in this paper we analyze and enhance the privacy of decentralized inference with GNNs in wireless networks. Specifically, we adopt local differential privacy as the metric and design novel privacy-preserving signals as well as privacy-guaranteed training algorithms to achieve privacy-preserving inference. We also define the SNR-privacy trade-off function to analyze the performance upper bound of decentralized inference with GNNs in wireless networks. To further improve communication and computation efficiency, we adopt over-the-air computation and theoretically demonstrate its advantage in privacy preservation. Through extensive simulations on synthetic graph data, we verify our theoretical analysis, validate the effectiveness of the proposed privacy-preserving wireless signaling and privacy-guaranteed training algorithms, and provide some guidance on practical implementation.
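The following is a minimal sketch of the local-differential-privacy idea described above, not the paper's exact mechanism: each node clips its embedding and adds Gaussian noise before sharing it with neighbors during decentralized inference. All names and parameters (`clip_norm`, `epsilon`, `delta`) are illustrative.

```python
# A minimal LDP sketch: clip each neighbor message, then add Gaussian noise
# calibrated by the standard (epsilon, delta) Gaussian-mechanism formula.
import numpy as np

def privatize_embedding(h, clip_norm=1.0, epsilon=1.0, delta=1e-5, rng=None):
    """Clip an embedding and add Gaussian noise before transmission."""
    rng = rng or np.random.default_rng()
    h = h * min(1.0, clip_norm / (np.linalg.norm(h) + 1e-12))  # bound sensitivity
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return h + rng.normal(0.0, sigma, size=h.shape)

def aggregate_neighbors(neighbor_embeddings, **ldp_kwargs):
    """Mean-aggregate the privatized messages received from neighbors."""
    noisy = [privatize_embedding(h, **ldp_kwargs) for h in neighbor_embeddings]
    return np.mean(noisy, axis=0)

# Example: a node with three neighbors, 8-dimensional embeddings.
rng = np.random.default_rng(0)
neighbors = [rng.normal(size=8) for _ in range(3)]
print(aggregate_neighbors(neighbors, epsilon=2.0))
```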
Although semantic communication has shown satisfactory performance on a large number of tasks, the impact of semantic noise and the robustness of such systems have not been well studied. Semantic noise refers to the mismatch between the intended semantic symbols and the received ones, which leads to task failure. In this paper, we first propose a framework for robust end-to-end semantic communication systems to combat semantic noise. In particular, we analyze both sample-dependent and sample-independent semantic noise. To combat semantic noise, adversarial training with weight perturbation is developed to incorporate samples with semantic noise into the training dataset. We then propose to mask a portion of the input, where semantic noise frequently appears, and design a masked vector-quantized variational autoencoder (VQ-VAE) with a noise-related masking strategy. A discrete codebook shared by the transmitter and receiver is used for the encoded feature representations. To further improve system robustness, we develop a feature importance module (FIM) to suppress noise-related and task-irrelevant features. Thus, the transmitter only needs to transmit the indices of these important task-related features in the codebook. Simulation results show that the proposed method can be applied to many downstream tasks, significantly improves robustness against semantic noise, and considerably reduces transmission overhead.
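A minimal sketch of the shared-codebook idea, under the assumption of simple nearest-neighbor quantization: encoder features are mapped to their nearest codebook entries and only the integer indices are transmitted; the receiver looks the vectors back up in the shared codebook. The top-k importance selection stands in loosely for the FIM; shapes and names are illustrative.

```python
# Sketch: quantize features against a shared codebook; transmit only indices.
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(256, 32))    # shared by transmitter and receiver
features = rng.normal(size=(10, 32))     # encoder output: 10 feature vectors
importance = rng.random(10)              # stand-in for FIM importance scores

# Keep only the most task-relevant features (hypothetical top-k selection).
keep = np.argsort(importance)[-6:]

# Quantize: index of the nearest codebook entry for each kept feature.
dists = np.linalg.norm(features[keep, None, :] - codebook[None, :, :], axis=-1)
indices = dists.argmin(axis=1)           # this is all that goes over the air

# Receiver reconstructs the features from the shared codebook.
reconstructed = codebook[indices]
print(indices, reconstructed.shape)
```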
Deep unfolding neural networks (NNs) have received great attention due to their relatively low complexity. Typically, these deep-unfolded NNs are restricted to a fixed depth for all inputs, whereas the optimal number of layers required for convergence varies with different inputs. In this paper, we first develop a deep deterministic policy gradient (DDPG)-driven deep unfolding framework with adaptive depth for different inputs, where the trainable parameters of the deep-unfolded NN are learned by DDPG rather than updated directly by the stochastic gradient descent algorithm. Specifically, the optimization variables, trainable parameters, and architecture of the deep-unfolded NN are designed as the state, action, and state transition of DDPG, respectively. This framework is then employed to handle the channel estimation problem in massive multiple-input multiple-output (MIMO) systems. Concretely, we first formulate the channel estimation problem with an off-grid basis and develop a sparse Bayesian learning (SBL)-based algorithm to solve it. Second, the SBL-based algorithm is unfolded into a layered structure with a set of trainable parameters. Third, the proposed DDPG-driven deep unfolding framework is employed to solve this channel estimation problem based on the unfolded structure of the SBL-based algorithm. To realize adaptive depth, we design a halting score, a function of the channel reconstruction error, to indicate when to stop. Furthermore, the proposed framework is extended to realize adaptive depth for general deep neural networks (DNNs). Simulation results show that the proposed algorithm outperforms conventional fixed-depth optimization algorithms and DNNs, with a much smaller number of layers in most cases.
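A minimal sketch of the adaptive-depth mechanism, using unfolded ISTA for sparse recovery as a hedged stand-in for the paper's unfolded SBL algorithm: each "layer" is one iteration with its own parameters (fixed here; trainable and learned by DDPG in the paper), and a residual-based halting score decides how many layers a given input actually uses.

```python
# Sketch: unfolded iterations with an input-dependent early exit.
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def adaptive_depth_ista(A, y, step_sizes, thresholds, halt_tol=1e-3):
    """Run unfolded ISTA layers until the halting score drops below halt_tol."""
    x = np.zeros(A.shape[1])
    for depth, (mu, lam) in enumerate(zip(step_sizes, thresholds), start=1):
        x_new = soft_threshold(x + mu * A.T @ (y - A @ x), lam)
        halting_score = np.linalg.norm(x_new - x) / (np.linalg.norm(x_new) + 1e-12)
        x = x_new
        if halting_score < halt_tol:   # input-dependent early exit
            break
    return x, depth

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = 1.0
y = A @ x_true
# Per-layer parameters would be trainable; here they are fixed placeholders.
x_hat, used_depth = adaptive_depth_ista(A, y, [0.9] * 50, [0.05] * 50)
print(used_depth, np.linalg.norm(x_hat - x_true))
```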
Graph neural networks (GNNs) are effective neural network models for graph data and are widely used in different fields, including wireless communications. Unlike other neural network models, GNNs can be implemented in a decentralized manner with information exchange among neighbors, making them a potentially powerful tool for decentralized control in wireless communication systems. The main bottleneck, however, is wireless channel impairments, which deteriorate the prediction robustness of GNNs. To overcome this obstacle, in this paper we analyze and enhance the robustness of decentralized GNNs in different wireless communication systems. Specifically, using a GNN binary classifier as an example, we first develop a methodology to verify whether its predictions are robust. Then, we analyze the performance of the decentralized GNN binary classifier in both uncoded and coded wireless communication systems. To deal with imperfect wireless transmission and enhance prediction robustness, we further propose novel retransmission mechanisms for the above two communication systems. Through simulations on synthetic graph data, we validate our analysis, verify the effectiveness of the proposed retransmission mechanisms, and provide some insights for practical implementation.
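A minimal sketch of the retransmission intuition for the uncoded case, under the assumption that retransmitted analog copies are simply averaged: repeating a neighbor's message K times over an AWGN channel cuts the effective noise variance by a factor of K, which is the lever such mechanisms can use to restore prediction robustness.

```python
# Sketch: averaging K noisy copies of the same analog message reduces error.
import numpy as np

def receive_with_retransmissions(message, k, noise_std, rng):
    """Average k independently received noisy copies of the same message."""
    copies = message + rng.normal(0.0, noise_std, size=(k,) + message.shape)
    return copies.mean(axis=0)

rng = np.random.default_rng(0)
msg = rng.normal(size=16)                  # a neighbor's embedding
for k in (1, 4, 16):
    err = np.linalg.norm(receive_with_retransmissions(msg, k, 1.0, rng) - msg)
    print(f"retransmissions={k:2d}  residual error={err:.3f}")
```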
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task freeing people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise the segmentation performance over the target domain. A key idea to tackle this problem is to perform both image-level and feature-level adaptation jointly. Unfortunately, there is a lack of such unified approaches for UDA tasks in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; and we further regularize category centers in the source domain through a category-oriented triplet loss and perform target domain consistency regularization over augmented target domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. In the commonly tested GTA5$\rightarrow$Cityscapes task, our proposed method using Deeplab V3+ as the backbone surpasses previous SOTA by 8%, achieving 58.2% in mIoU.
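A minimal sketch of image-level photometric alignment, assuming a simple per-channel mean/std matching rather than the paper's exact module: a target-domain image's low-level statistics are shifted toward source-domain statistics.

```python
# Sketch: match per-channel mean/std of a target image to source statistics.
import numpy as np

def photometric_align(target_img, src_mean, src_std, eps=1e-6):
    """Shift target_img's per-channel statistics toward the source domain."""
    t_mean = target_img.mean(axis=(0, 1), keepdims=True)
    t_std = target_img.std(axis=(0, 1), keepdims=True)
    return (target_img - t_mean) / (t_std + eps) * src_std + src_mean

rng = np.random.default_rng(0)
target = rng.random((64, 64, 3)).astype(np.float32)
aligned = photometric_align(target,
                            src_mean=np.array([0.45, 0.43, 0.40]),
                            src_std=np.array([0.22, 0.22, 0.23]))
print(aligned.mean(axis=(0, 1)))  # approaches the source means
```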
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
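A minimal sketch of the style-adaptation idea, with the simplifying assumption that the style code predicts a per-channel scale and shift on the feed-forward output (a FiLM-like modulation) rather than the paper's full weight-adjustment scheme; all dimensions and weights are illustrative.

```python
# Sketch: a feed-forward block whose output is modulated by a style code.
import numpy as np

rng = np.random.default_rng(0)
d_model, d_ff, d_style = 64, 256, 16

W1 = rng.normal(0, 0.02, (d_model, d_ff))
W2 = rng.normal(0, 0.02, (d_ff, d_model))
W_scale = rng.normal(0, 0.02, (d_style, d_model))
W_shift = rng.normal(0, 0.02, (d_style, d_model))

def style_adaptive_ffn(x, style_code):
    """Standard FFN (ReLU) whose output is scaled/shifted by the style code."""
    scale = 1.0 + style_code @ W_scale   # style-conditioned gain
    shift = style_code @ W_shift         # style-conditioned bias
    h = np.maximum(x @ W1, 0.0) @ W2
    return h * scale + shift

tokens = rng.normal(size=(10, d_model))  # decoded motion tokens
style = rng.normal(size=d_style)         # style code from the style encoder
print(style_adaptive_ffn(tokens, style).shape)
```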
Witnessing the impressive achievements of pre-training techniques on large-scale data in the field of computer vision and natural language processing, we wonder whether this idea could be adapted in a grab-and-go spirit, and mitigate the sample inefficiency problem for visuomotor driving. Given the highly dynamic and variant nature of the input, the visuomotor driving task inherently lacks view and translation invariance, and the visual input contains massive irrelevant information for decision making, making predominant pre-training approaches from general vision less suitable for the autonomous driving task. To this end, we propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward fully self-supervised framework curated for policy pre-training in visuomotor driving. We aim at learning policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos. The proposed PPGeo is performed in two stages to support effective self-supervised training. In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input. In the second stage, the visual encoder learns driving policy representation by predicting the future ego-motion and optimizing with the photometric error based on current visual observation only. As such, the pre-trained visual encoder is equipped with rich driving policy related representations and thereby competent for multiple visuomotor driving tasks. Extensive experiments covering a wide span of challenging scenarios have demonstrated the superiority of our proposed approach, where improvements range from 2% to even over 100% with very limited data. Code and models will be available at https://github.com/OpenDriveLab/PPGeo.
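A minimal sketch of the photometric self-supervision signal used in stage two, with the view synthesis (warping by predicted depth and ego-motion) abstracted into a placeholder; only the loss structure is illustrated, and all names are hypothetical.

```python
# Sketch: L1 photometric error between a target frame and a re-synthesized view.
import numpy as np

def photometric_l1(target_frame, synthesized_frame):
    """Mean absolute photometric error between target and reconstructed view."""
    return np.abs(target_frame - synthesized_frame).mean()

rng = np.random.default_rng(0)
frame_t1 = rng.random((128, 256, 3))
# Placeholder for warp(frame_t, predicted_depth, predicted_pose):
synthesized = frame_t1 + rng.normal(0, 0.05, frame_t1.shape)
print(f"photometric loss: {photometric_l1(frame_t1, synthesized):.4f}")
```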
Increasing research interest focuses on sequential recommender systems, which aim to model dynamic sequence representations precisely. However, the most commonly used loss functions in state-of-the-art sequential recommendation models have essential limitations. To name a few: Bayesian Personalized Ranking (BPR) loss suffers from the vanishing gradient problem caused by numerous negative samples, as well as from prediction biases; Binary Cross-Entropy (BCE) loss is sensitive to the number of negative samples, and is thus likely to ignore valuable negative examples and reduce training efficiency; Cross-Entropy (CE) loss only considers the last timestamp of the training sequence, which causes low utilization of sequence information and results in inferior user sequence representations. To avoid these limitations, in this paper we propose to calculate a Cumulative Cross-Entropy (CCE) loss over the sequence. CCE is simple and direct, enjoying the virtues of painless deployment, no negative sampling, and effective and efficient training. We conduct extensive experiments on five benchmark datasets to demonstrate the effectiveness and efficiency of CCE. The results show that employing CCE loss on three state-of-the-art models, GRU4Rec, SASRec, and S3-Rec, yields average improvements of 125.63%, 69.90%, and 33.24% in full-ranking NDCG@5, respectively. Using CCE, the performance curve of the models on the test data rises rapidly with wall-clock time and is superior to that of other loss functions throughout almost the whole training process.
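A minimal sketch of CCE as the abstract describes it: full-softmax cross-entropy summed over every timestamp of the training sequence rather than only the last one. Shapes and names are illustrative.

```python
# Sketch: cumulative cross-entropy over all sequence positions.
import numpy as np

def cumulative_cross_entropy(logits, targets):
    """logits: (T, num_items) per-step scores; targets: (T,) next-item ids."""
    shifted = logits - logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].sum()  # sum over all steps

rng = np.random.default_rng(0)
T, num_items = 5, 1000
logits = rng.normal(size=(T, num_items))
targets = rng.integers(0, num_items, size=T)
print(cumulative_cross_entropy(logits, targets))
# By contrast, last-timestamp CE would keep only -log_probs[-1, targets[-1]].
```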
Recent advances in self-supervised learning (SSL) in computer vision are primarily comparative, with the goal of preserving invariant and discriminative semantics in latent representations by comparing siamese image views. However, the preserved high-level semantics do not contain enough local information, which is vital in medical image analysis (e.g., image-based diagnosis and tumor segmentation). To mitigate the locality problem of comparative SSL, we propose to incorporate the task of pixel restoration for explicitly encoding more pixel-level information into high-level semantics. We also address the preservation of scale information, a powerful aid to image understanding that has not drawn much attention in SSL. The resulting framework can be formulated as a multi-task optimization problem on the feature pyramid. Specifically, we conduct multi-scale pixel restoration and siamese feature comparison in the pyramid. In addition, we propose a non-skip U-Net to build the feature pyramid and develop sub-crop to replace multi-crop in 3D medical imaging. The proposed unified SSL framework (PCRLv2) surpasses its self-supervised counterparts on various tasks, including brain tumor segmentation (BraTS 2018), chest pathology identification (ChestX-ray, CheXpert), pulmonary nodule detection (LUNA), and abdominal organ segmentation (LiTS), sometimes outperforming them by large margins with limited annotations.
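A minimal sketch of the multi-task objective described above, with encoders and decoders abstracted to random arrays: at each pyramid scale, a pixel-restoration term (MSE) is combined with a siamese feature-comparison term (cosine distance). The weighting `alpha` is a hypothetical knob.

```python
# Sketch: per-scale restoration + comparison loss, summed over pyramid levels.
import numpy as np

def cosine_distance(a, b, eps=1e-12):
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)

def pcrl_style_loss(restored, original, feat_view1, feat_view2, alpha=1.0):
    """Per-scale loss: pixel restoration (MSE) + siamese comparison (cosine)."""
    restoration = np.mean((restored - original) ** 2)
    comparison = cosine_distance(feat_view1, feat_view2)
    return restoration + alpha * comparison

rng = np.random.default_rng(0)
total = 0.0
for scale in (32, 16, 8):                  # feature-pyramid levels
    img = rng.random((scale, scale))
    restored = img + rng.normal(0, 0.1, img.shape)
    f1, f2 = rng.normal(size=128), rng.normal(size=128)
    total += pcrl_style_loss(restored, img, f1, f2)
print(f"multi-scale loss: {total:.4f}")
```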